Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation
Authors
Abstract
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data. Existing methods mainly focus on model-based adaptation in a self-training manner, such as predicting pseudo labels for new domain datasets. Since pseudo labels are noisy and unreliable, these methods suffer from catastrophic forgetting and error accumulation when dealing with dynamic data distributions. Motivated by prompt learning in NLP, in this paper we propose to learn an image-level visual prompt while having the source model parameters frozen. During testing, the changing target data can be adapted by reformulating the input with the learned prompts. Specifically, we devise two types of prompts, i.e., domains-specific prompts and domains-agnostic prompts, to extract current-domain knowledge and maintain domain-shared knowledge during continual adaptation. Furthermore, we design a homeostasis-based strategy to suppress domain-sensitive parameters in the domain-invariant prompts, so that domain-shared knowledge is learned more effectively. This transition from a model-dependent paradigm to a model-free one enables us to bypass the catastrophic forgetting and error accumulation problems. Experiments show that our proposed method achieves significant performance gains over state-of-the-art approaches on four widely used benchmarks, including CIFAR-10C, CIFAR-100C, ImageNet-C, and VLCS.
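The core idea above, adapting by reformulating the input rather than updating the frozen model, can be illustrated with a minimal numpy sketch. The function name `apply_visual_prompt` and the simple additive combination of the two prompt types are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def apply_visual_prompt(image, specific_prompt, agnostic_prompt):
    """Reformulate the input by adding learned prompts to the image.

    The backbone model stays frozen; during test-time adaptation only
    the prompts would be updated. (Hypothetical sketch: the paper's
    actual prompt placement and update rule differ in detail.)
    """
    return image + specific_prompt + agnostic_prompt

# toy 4x4 single-channel "image" and small learned prompts
image = np.zeros((4, 4))
specific = np.full((4, 4), 0.10)  # extracts current-domain knowledge
agnostic = np.full((4, 4), 0.05)  # maintains domain-shared knowledge
prompted = apply_visual_prompt(image, specific, agnostic)
```

Because adaptation touches only the prompts, the source model's weights never drift, which is what lets this model-free formulation sidestep catastrophic forgetting.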
Similar resources
Sample-oriented Domain Adaptation for Image Classification
Image processing is a method of performing operations on an image in order to enhance it or extract useful information from it. Conventional image processing algorithms cannot perform well in scenarios where the training images (source domain) used to learn the model have a different distribution from the test images (target domain). Also, many real world applicat...
Image Analogies for Visual Domain Adaptation
In usual approaches to visual domain adaptation, algorithms are used to infer a common “domain transform” between the feature space of a source domain and that of a target domain. We propose a new approach in which we use “image analogies” to infer a domain transform that can be applied to images rather than features. This approach is applicable for domain pairs in which the domain transform is...
Self-ensembling for visual domain adaptation
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [29] of temporal ensembling [14], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectivenes...
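The mean teacher technique referenced above maintains a teacher network whose weights are an exponential moving average (EMA) of the student's weights. A minimal sketch of that update, with a toy one-dimensional weight vector standing in for real network parameters:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Exponential moving average of student weights into the teacher.

    alpha close to 1 makes the teacher a slowly moving, smoothed copy
    of the student, which stabilizes its predictions.
    """
    return alpha * teacher_w + (1.0 - alpha) * student_w

# toy weights: the teacher drifts toward the (fixed) student over steps
teacher = np.zeros(3)
student = np.ones(3)
for _ in range(10):
    teacher = ema_update(teacher, student, alpha=0.9)
```

After n steps against a fixed student, the teacher sits at 1 - alpha**n of the way to the student, so the smoothing horizon is controlled entirely by alpha.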
Max-margin transforms for visual domain adaptation
We present a new algorithm for training linear support vector machine classifiers across image domains. To compensate for statistical differences between domains, our algorithm learns a linear transformation that maps points from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and...
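The transform-learning step described above can be sketched in isolation: given paired samples, fit a linear (here affine) map that sends target-domain points toward the source domain. This sketch uses plain least squares instead of the paper's joint max-margin optimization, purely to show the mapping idea; the paired data and the bias-column trick are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, (50, 3))
target = source * 2.0 + 0.5  # a scaled-and-shifted "target" version

# Augment target with a ones column so the learned W can absorb the
# offset, making the map affine rather than purely linear.
target_aug = np.hstack([target, np.ones((50, 1))])

# Least-squares fit of W such that target_aug @ W approximates source.
W, *_ = np.linalg.lstsq(target_aug, source, rcond=None)
mapped = target_aug @ W
```

In this synthetic case the target is an exact affine transform of the source, so the fit recovers it; with real domain shift the map only reduces, not eliminates, the discrepancy.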
Distribution-Matching Embedding for Visual Domain Adaptation
Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since ...
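A standard way to quantify the distribution mismatch discussed above is Maximum Mean Discrepancy (MMD); with a linear kernel it reduces to the squared distance between feature means. The sketch below measures the gap between two synthetic domains, using an identity matrix as a placeholder for a learned embedding (both the placeholder and the helper names are assumptions for illustration).

```python
import numpy as np

def mmd_linear(source_feats, target_feats):
    """Linear-kernel MMD: squared L2 distance between feature means."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

def embed(x, W):
    """Project features into the comparison space."""
    return x @ W

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (100, 4))
tgt = rng.normal(1.0, 1.0, (100, 4))  # mean-shifted target domain

W = np.eye(4)  # placeholder for a learned projection
gap = mmd_linear(embed(src, W), embed(tgt, W))
```

Distribution-matching methods would train W to drive this gap toward zero; the point made in the abstract is that doing so in a suitable embedded space, rather than the original feature space, makes the comparison meaningful.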
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i6.25922